The propensity model introduced by Jain et al. (2016) has become a standard approach for handling missing and long-tail labels in extreme multi-label classification (XMLC). In this paper, we critically revisit this approach, showing that despite its theoretical soundness, its application in contemporary XMLC works is debatable. We exhaustively discuss the flaws of the propensity-based approach and present several recipes, some of them related to solutions used in search engines and recommender systems, which we believe constitute promising alternatives to be followed in XMLC.
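For reference, the propensity model in question and the propensity-scored precision@k (PSP@k) metric built on it are commonly stated as follows; the notation follows Jain et al. (2016), but the constants are restated from memory and should be checked against the original paper.

```latex
% Empirical propensity of observing label l, and propensity-scored precision@k
\[
  p_l = \frac{1}{1 + C\, e^{-A \log (N_l + B)}},
  \qquad C = (\log N - 1)\,(B + 1)^{A},
\]
\[
  \mathrm{PSP@}k(\hat{y}, y) = \frac{1}{k} \sum_{l \in \mathrm{top}_k(\hat{y})} \frac{y_l}{p_l},
\]
% where N is the number of training instances, N_l the number of instances
% annotated with label l, and A, B are dataset-dependent hyperparameters.
```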
Extreme multi-label text classification (XMTC) is a text classification problem in which (i) the output space is extremely large, (ii) each data point may have multiple positive labels, and (iii) the data follows a strongly imbalanced distribution. With applications in recommendation systems and automatic tagging of web-scale documents, research on XMTC has focused on improving prediction accuracy and dealing with imbalanced data. However, the robustness of deep learning based XMTC models against adversarial examples has been largely underexplored. In this paper, we investigate the behaviour of XMTC models under adversarial attacks. To this end, we first define adversarial attacks in the multi-label text classification problem. We categorize attacks on multi-label text classifiers as (a) positive-targeted, where the target positive label should fall among the top-k predicted labels, and (b) negative-targeted, where the target negative label should be among the top-k predicted labels. Then, through experiments on APLC-XLNet and AttentionXML, we show that XMTC models are highly vulnerable to positive-targeted attacks but more robust to negative-targeted ones. Furthermore, our experiments show that the success rate of positive-targeted adversarial attacks has an imbalanced distribution. More precisely, tail classes are highly vulnerable to adversarial attacks, for which an attacker can generate adversarial samples with high similarity to the actual data points. To overcome this problem, we explore the effect of rebalanced loss functions in XMTC: not only do they increase accuracy on tail classes, but they also improve the robustness of these classes against adversarial attacks. The code for our experiments is available at https://github.com/xmc-aalto/adv-xmtc
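For concreteness, below is a minimal, hypothetical sketch of a positive-targeted attack in the sense defined above: the attacker perturbs the input (here, its embedding) so that a chosen target label enters the top-k predictions. The toy linear model, dimensions, and step sizes are illustrative stand-ins, not the implementation in the linked repository.

```python
# Hypothetical positive-targeted attack on a multi-label classifier: push a chosen
# label into the top-k predictions via small sign-gradient steps on the embedding.
import torch

def positive_targeted_attack(model, emb, target_label, k=5, eps=0.05, steps=10, lr=0.01):
    delta = torch.zeros_like(emb, requires_grad=True)
    for _ in range(steps):
        logits = model(emb + delta)
        loss = logits[target_label]          # maximize the target label's score
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)          # keep the perturbation small
        delta.grad.zero_()
    final = model(emb + delta)
    in_top_k = target_label in final.topk(k).indices
    return emb + delta, in_top_k

# Toy usage: a linear "classifier" over 512-dim embeddings with 10k labels.
torch.manual_seed(0)
model = torch.nn.Linear(512, 10_000)
emb = torch.randn(512)
adv_emb, success = positive_targeted_attack(model, emb, target_label=42)
print("target label in top-k:", success)
```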
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
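A hypothetical sketch of the detector-plus-hierarchical-classifier baseline mentioned above: a detector proposes vehicle boxes, and each crop is then assigned a three-level label. Both components here are stand-in stubs, not the actual FGVD baselines.

```python
# Two-stage fine-grained vehicle detection pipeline (stubs only, for illustration).
def detect_vehicles(image):
    """Stub detector: returns (x1, y1, x2, y2) boxes."""
    return [(10, 10, 110, 90)]

def classify_hierarchy(crop):
    """Stub hierarchical classifier: (coarse type, make, fine-grained model)."""
    return ("two-wheeler", "example-make", "example-model")

def fine_grained_detection(image):
    results = []
    for (x1, y1, x2, y2) in detect_vehicles(image):
        crop = [row[x1:x2] for row in image[y1:y2]]  # naive crop of a nested-list image
        results.append(((x1, y1, x2, y2), classify_hierarchy(crop)))
    return results

# Toy usage on a 128x128 "image" given as a nested list.
image = [[0] * 128 for _ in range(128)]
print(fine_grained_detection(image))
```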
Long-term OCR services aim to provide high-quality output to their users at competitive costs. It is essential to upgrade the models because of the complex data loaded by the users. The service providers encourage users who provide data on which the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR works have prepared models on standard datasets without considering the end users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on the dataset of 15 users. We fix a budget of 4 users for each iteration. For the first iteration, the model trains directly on the dataset from the first four users. For each subsequent iteration, all remaining users write a page each, which the service provider later analyzes to select the 4 (new) best users based on the quality of predictions on the human-readable words. Selected users then write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare it with the subset from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations on the effect of CL, user selection, and especially the data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios for both service providers and end users.
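A hypothetical sketch of one upgrade iteration as described above: candidate users each contribute a page, the provider scores the current model's predictions on human-readable words, selects the 4 best users, and fine-tunes with an easy-to-hard curriculum. Every helper here (the accuracy score, the difficulty measure, the placeholder model) is an illustrative stand-in, not the authors' implementation.

```python
# Illustrative user selection and curriculum ordering for one OCR upgrade iteration.
import random

def word_accuracy(model, page):
    """Fraction of human-readable words the current model predicts correctly (stub)."""
    return random.random()

def select_best_users(model, candidate_pages, budget=4):
    ranked = sorted(candidate_pages, key=lambda p: word_accuracy(model, p), reverse=True)
    return [p["user"] for p in ranked[:budget]]

def curriculum_order(samples):
    """Order samples easy-to-hard; here 'difficulty' is simply word length (stub)."""
    return sorted(samples, key=lambda s: len(s["word"]))

# Toy usage for one iteration with 11 remaining candidate users.
model = object()  # placeholder for the current Handwritten Hindi OCR model
pages = [{"user": f"user_{i}"} for i in range(11)]
print("selected for this iteration:", select_best_users(model, pages))

samples = [{"word": "paathshaala"}, {"word": "ghar"}, {"word": "vidyaalay"}]
print("curriculum order:", [s["word"] for s in curriculum_order(samples)])
```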
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on the EGTEA classification and 5.9% on the Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior with increasing pre-training data and model size.
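The auto-generated narrations are consumed by a standard dual-encoder contrastive objective; a minimal sketch of such a symmetric InfoNCE-style loss (a generic formulation, not LaViLa's actual training code) is shown below.

```python
# Generic symmetric contrastive loss aligning video clips with their narrations.
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature        # (B, B) similarity matrix
    targets = torch.arange(v.size(0))     # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage: batch of 8 clip/narration pairs with 256-dim embeddings.
loss = video_text_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```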
We explore unifying a neural segmenter with two-pass cascaded encoder ASR into a single model. A key challenge is allowing the segmenter (which runs in real-time, synchronously with the decoder) to finalize the 2nd pass (which runs 900 ms behind real-time) without introducing user-perceived latency or deletion errors during inference. We propose a design where the neural segmenter is integrated with the causal 1st pass decoder to emit an end-of-segment (EOS) signal in real-time. The EOS signal is then used to finalize the non-causal 2nd pass. We experiment with different ways to finalize the 2nd pass, and find that a novel dummy frame injection strategy allows for simultaneously high-quality 2nd pass results and low finalization latency. On a real-world long-form captioning task (YouTube), we achieve 2.4% relative WER and 140 ms EOS latency gains over a baseline VAD-based segmenter with the same cascaded encoder.
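To make the dummy frame idea concrete, here is a purely illustrative sketch (not the paper's implementation): once the streaming segmenter emits EOS, zero-valued dummy frames are appended to fill the non-causal encoder's right context, so the 2nd pass can be finalized immediately instead of waiting for future audio. The frame counts and the identity "encoder" below are assumptions.

```python
# Finalize a non-causal 2nd pass at EOS by injecting dummy right-context frames.
import numpy as np

RIGHT_CONTEXT = 90  # non-causal look-ahead, in 10 ms frames (assumed value)

def finalize_segment(causal_feats: np.ndarray, second_pass_encoder) -> np.ndarray:
    """Pad the finished segment with dummy frames and run the non-causal encoder once."""
    dummy = np.zeros((RIGHT_CONTEXT, causal_feats.shape[1]), dtype=causal_feats.dtype)
    padded = np.concatenate([causal_feats, dummy], axis=0)
    return second_pass_encoder(padded)[: len(causal_feats)]

# Toy usage with a stand-in "encoder" (identity) over 80-dim features.
feats = np.random.randn(200, 80)
print(finalize_segment(feats, lambda x: x).shape)  # (200, 80)
```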
Damage to the inferior frontal gyrus (Broca's area) can cause agrammatic aphasia, wherein patients, although able to comprehend, lack the ability to form complete sentences. This inability leads to communication gaps which cause difficulties in their daily lives. The usage of assistive devices can help in mitigating these issues and enable the patients to communicate effectively. However, due to the lack of large-scale studies of linguistic deficits in aphasia, research on such assistive technology is relatively limited. In this work, we present two contributions that aim to re-initiate research and development in this field. Firstly, we propose a model that uses linguistic features from small-scale studies on aphasia patients and generates large-scale datasets of synthetic aphasic utterances from grammatically correct datasets. We show that the mean length of utterance, the noun/verb ratio, and the simple/complex sentence ratio of our synthetic datasets correspond to the reported features of aphasic speech. Further, we demonstrate how the synthetic datasets may be utilized to develop assistive devices for aphasia patients. The pre-trained T5 transformer is fine-tuned using the generated dataset to suggest 5 corrected sentences given an aphasic utterance as input. We evaluate the efficacy of the T5 model using BLEU and cosine semantic similarity scores. Encouraging results, with a BLEU score of 0.827/1.00 and a semantic similarity of 0.904/1.00, were obtained. These results provide a strong foundation for the concept that a synthetic dataset based on small-scale studies on aphasia can be used to develop effective assistive technology.
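A sketch of how such a fine-tuned T5 could be queried for 5 candidate corrections, using the Hugging Face transformers API; the "t5-base" checkpoint stands in for the paper's fine-tuned weights, and the example utterance is invented.

```python
# Generate 5 candidate corrected sentences for an aphasic-style utterance with T5.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")  # replace with fine-tuned weights

utterance = "want water drink"  # invented aphasic-style input
inputs = tokenizer(utterance, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5, max_length=32)
for o in outputs:
    print(tokenizer.decode(o, skip_special_tokens=True))
```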
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data is not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo labels, which can be noisy, to improve the target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo label refinement methods to reduce noisy pseudo labels. Further, we propose a novel MixUp Knowledge Distillation (MKD) for better generalization on multiple target domains using various source-free STDA models. We also show that a Vision Transformer (VT) backbone gives better feature representation with improved domain transferability and class discriminability. Our proposed framework achieves state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets like Office-Home, Office-Caltech, and DomainNet. Project Page: https://sites.google.com/view/conmix-vcl
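A minimal sketch of what a MixUp-based knowledge distillation step of this kind can look like, under assumed details rather than the CoNMix implementation: two target samples are mixed, and the student is trained to match the correspondingly mixed soft predictions averaged over the source-free STDA teachers.

```python
# Generic MixUp knowledge distillation from multiple teachers into one student.
import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teachers, x1, x2, alpha=0.3, T=2.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2
    with torch.no_grad():
        # Average teacher soft labels, mixed with the same lambda.
        p1 = torch.stack([F.softmax(t(x1) / T, dim=-1) for t in teachers]).mean(0)
        p2 = torch.stack([F.softmax(t(x2) / T, dim=-1) for t in teachers]).mean(0)
        p_mix = lam * p1 + (1 - lam) * p2
    log_q = F.log_softmax(student(x_mix) / T, dim=-1)
    return F.kl_div(log_q, p_mix, reduction="batchmean") * (T * T)

# Toy usage: linear "teachers"/"student" over flattened 3x32x32 inputs, 10 classes.
teachers = [torch.nn.Linear(3 * 32 * 32, 10) for _ in range(2)]
student = torch.nn.Linear(3 * 32 * 32, 10)
x1, x2 = torch.randn(4, 3 * 32 * 32), torch.randn(4, 3 * 32 * 32)
print(mixup_kd_loss(student, teachers, x1, x2).item())
```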
Digital art is an artistic practice that uses digital technology as part of the generative or creative process. With the advent of digital currencies and NFTs (Non-Fungible Tokens), the demand for digital art is growing rapidly. In this manuscript, we advocate the concept of using deep generative networks with adversarial training for stable and varied art generation. The work mainly focuses on the Deep Convolutional Generative Adversarial Network (DC-GAN) and explores techniques for addressing common pitfalls in GAN training. We compare various architectures and designs of DC-GAN to arrive at recommended design choices for stable and realistic generation. The main focus of this work is to generate realistic images that do not exist in reality but are synthesized from random noise by the proposed model. We provide visual results of generated animal face images (some showing evidence of a blend of species), along with recommendations for training, architecture, and design choices. We also show how training-image preprocessing plays a vital role in GAN training.
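As a concrete reference point, a minimal DC-GAN generator following the standard design choices implied above (transposed convolutions, batch normalization, ReLU, and a Tanh output) might look as follows; this is an illustrative sketch, not the architecture evaluated in the manuscript.

```python
# Minimal DC-GAN-style generator: noise vector -> 32x32 RGB image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, channels, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1]; training images should be preprocessed to match
        )

    def forward(self, z):
        return self.net(z)

# Toy usage: 16 random noise vectors -> 16 synthetic 32x32 RGB images.
g = Generator()
print(g(torch.randn(16, 100, 1, 1)).shape)  # torch.Size([16, 3, 32, 32])
```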
This manuscript addresses the simultaneous problems of predicting all-cause inpatient readmission or death after discharge, and quantifying the impact of discharge placement in preventing these adverse events. To this end, we develop an inherently interpretable multilevel Bayesian modeling framework inspired by the piecewise linearity of ReLU-activated deep neural networks. In the survival model, we explicitly adjust for confounding in order to quantify local average treatment effects of discharge placement interventions. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, and then tested it on claims from 2012. Evaluated on classification accuracy for 30-day all-cause unplanned readmission (defined using official CMS methodology), the model performed similarly to XGBoost, logistic regression (after feature engineering), and a Bayesian deep neural network trained on the same data. Tested on the 30-day classification task using held-out future data, the model achieved an AUROC of approximately 0.76 and an AUPRC of approximately 0.50 (relative to the overall positive rate in the test data), demonstrating that one need not sacrifice interpretability for accuracy. Moreover, the model achieved a test AUROC of 0.78 when classifying 90-day all-cause unplanned readmission or death. We readily peer inside our inherently interpretable model and summarize its main findings. In addition, we demonstrate how the black-box post-hoc explainer tool SHAP can generate explanations that are not supported by the fitted model and that, if taken at face value, do not provide enough context to make the model actionable.
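For readers less familiar with the reported metrics, here is a small synthetic illustration of how AUROC and AUPRC are computed and why AUPRC should be read against the overall positive rate, which is the baseline of a random scorer. The labels and scores below are simulated, not the Medicare data, and the 15% positive rate is an assumption for the example only.

```python
# Illustrative AUROC/AUPRC computation on simulated labels and risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.15, size=10_000)                       # ~15% positives (assumed)
y_score = np.clip(0.15 + 0.3 * y_true + rng.normal(0, 0.2, 10_000), 0, 1)

print("positive rate (AUPRC baseline):", y_true.mean())
print("AUROC:", roc_auc_score(y_true, y_score))
print("AUPRC:", average_precision_score(y_true, y_score))
```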